Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV) is a normalized measure of dispersion of a probability distribution. It is defined as the ratio of the standard deviation $\sigma$ to the mean $\mu$:

$$c_v = \frac{\sigma}{\mu}$$

This is only defined for a non-zero mean, and it is most useful for variables that are always positive. It is also known as unitized risk or the variation coefficient. It is often expressed as a percentage.
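As an illustrative sketch (not from the original article; the sample values are made up), the sample coefficient of variation can be computed as the sample standard deviation divided by the sample mean:

```python
import statistics

def coefficient_of_variation(data):
    """Sample CV: sample standard deviation divided by the sample mean.

    Only meaningful for a non-zero mean, ideally for strictly positive
    data measured on a ratio scale.
    """
    mean = statistics.mean(data)
    if mean == 0:
        raise ValueError("the CV is undefined for a zero mean")
    return statistics.stdev(data) / mean

weights_kg = [61.5, 70.2, 58.9, 74.3, 66.0]   # hypothetical sample
print(f"CV = {100 * coefficient_of_variation(weights_kg):.1f}%")  # as a percentage
```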
The coefficient of variation should only be computed for data measured on a ratio scale. For example, if a set of temperatures is analyzed, the standard deviation does not depend on whether the Kelvin or the Celsius scale is used, since an object that changes its temperature by 1 K also changes its temperature by 1 °C. However, the mean temperature of the data set would differ between the two scales by 273.15, and thus the coefficient of variation would differ as well. Hence the coefficient of variation is not meaningful for data on an interval scale.[1]
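A minimal numeric sketch of the temperature example (the readings below are hypothetical): converting the same data from Celsius to Kelvin leaves the standard deviation unchanged but shifts the mean by 273.15, so the coefficient of variation changes.

```python
import statistics

celsius = [20.0, 22.5, 19.0, 25.0, 23.5]    # hypothetical readings in °C
kelvin = [c + 273.15 for c in celsius]      # the same readings in K

for label, temps in (("Celsius", celsius), ("Kelvin", kelvin)):
    sd, mean = statistics.stdev(temps), statistics.mean(temps)
    print(f"{label}: sd = {sd:.3f}, mean = {mean:.2f}, CV = {sd / mean:.4f}")
# The standard deviation is identical on both scales, but the CV is not,
# because the mean is shifted by 273.15 -- an interval-scale artefact.
```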
Standardized moments are similar ratios, $\mu_k / \sigma^k$, which are also dimensionless and scale invariant. The variance-to-mean ratio, $\sigma^2 / \mu$, is another similar ratio, but it is not dimensionless and hence not scale invariant. See Normalization (statistics) for further ratios.
In signal processing, particularly image processing, the reciprocal ratio $\mu / \sigma$ is referred to as the signal-to-noise ratio.
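The scale-invariance claim can be checked with a short sketch (illustrative values only): rescaling the data, e.g. changing units, leaves the CV unchanged but multiplies the variance-to-mean ratio by the scale factor, and the reciprocal of the CV gives the signal-to-noise ratio in the sense used above.

```python
import statistics

def cv(data):
    """Coefficient of variation: sd / mean."""
    return statistics.stdev(data) / statistics.mean(data)

def variance_to_mean(data):
    """Variance-to-mean ratio: sigma^2 / mu (not dimensionless)."""
    return statistics.variance(data) / statistics.mean(data)

data = [2.0, 3.5, 4.1, 5.8, 7.2]
scaled = [10.0 * x for x in data]          # the same data in different units

print(cv(data), cv(scaled))                # equal: the CV is scale invariant
print(variance_to_mean(data), variance_to_mean(scaled))  # differ by a factor of 10
print(1.0 / cv(data))                      # reciprocal ratio: signal-to-noise ratio
```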
Comparison to standard deviation
Advantages
The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. Because the coefficient of variation is a dimensionless number, it should be used instead of the standard deviation when comparing data sets with different units or with widely different means.
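For instance (a hypothetical sketch), the spread of heights in centimetres cannot be compared with the spread of weights in kilograms via their standard deviations, but the two CVs are directly comparable:

```python
import statistics

heights_cm = [162.0, 171.0, 158.0, 180.0, 175.0]   # hypothetical sample
weights_kg = [55.0, 82.0, 61.5, 95.0, 70.0]        # hypothetical sample

for name, data in (("height (cm)", heights_cm), ("weight (kg)", weights_kg)):
    sd = statistics.stdev(data)
    cv = sd / statistics.mean(data)
    print(f"{name}: sd = {sd:.2f} (unit-bound), CV = {cv:.3f} (dimensionless)")
```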
Disadvantages
- When the mean value is near zero, the coefficient of variation is sensitive to small changes in the mean, limiting its usefulness (see the numeric sketch after this list).
- Unlike the standard deviation, it cannot be used to construct confidence intervals for the mean.
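A minimal numeric sketch of the first point (made-up values): adding a small constant to every observation barely changes the data, yet it roughly halves the CV, because the mean sits near zero.

```python
import statistics

a = [-0.10, 0.05, 0.12, -0.02, 0.06]    # mean close to zero
b = [x + 0.02 for x in a]               # every value shifted by a small amount

for name, data in (("a", a), ("b", b)):
    print(name, statistics.stdev(data) / statistics.mean(data))
# The spread is identical, yet the CV drops from about 3.8 to about 2.0,
# showing how unstable the CV is when the mean is near zero.
```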
Applications
The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the Root Mean Square Deviation (RMSD).
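A hedged sketch of the exponential case, using Python's standard library (the rate parameter and sample size below are arbitrary): the sample CV of exponentially distributed data should come out close to 1, and so should the squared coefficient of variation (SCV).

```python
import random
import statistics

random.seed(0)
# Exponential distribution with rate 0.5: mean = 2, standard deviation = 2.
sample = [random.expovariate(0.5) for _ in range(100_000)]

cv = statistics.stdev(sample) / statistics.mean(sample)
print(f"CV  ≈ {cv:.3f}")       # close to 1 for an exponential distribution
print(f"SCV ≈ {cv ** 2:.3f}")  # squared coefficient of variation
```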
Distribution
Under weak conditions on the sampled distribution, the probability distribution of the coefficient of variation is known; it was determined by Hendricks and Robey.[2] This is useful, for instance, in the construction of hypothesis tests or confidence intervals.
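The exact distribution derived by Hendricks and Robey is not reproduced here; as a generic alternative, a percentile-bootstrap confidence interval for the CV can be sketched as follows (the data and the helper bootstrap_cv_interval are illustrative assumptions, not part of the original article):

```python
import random
import statistics

def cv(data):
    return statistics.stdev(data) / statistics.mean(data)

def bootstrap_cv_interval(data, n_boot=2000, alpha=0.05, seed=1):
    """Percentile-bootstrap confidence interval for the coefficient of variation."""
    rng = random.Random(seed)
    boot = sorted(
        cv([rng.choice(data) for _ in range(len(data))]) for _ in range(n_boot)
    )
    lower = boot[int(alpha / 2 * n_boot)]
    upper = boot[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper

sample = [12.1, 9.8, 15.3, 11.0, 13.7, 10.4, 14.9, 12.8]   # hypothetical data
print(bootstrap_cv_interval(sample))
```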
See also
- Normalization (statistics)
Similar ratios
- Standardized moment
- Variance-to-mean ratio
- Signal-to-noise ratio
References